Results 1 - 20 of 32
1.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.
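To make one such pitfall concrete, the sketch below (an example invented here, not taken from the paper) shows how plain accuracy rewards a useless classifier under class imbalance, while balanced accuracy exposes it:

```python
import numpy as np

# Illustrative pitfall: plain accuracy on a highly imbalanced task.
# A degenerate model that always predicts "background" looks excellent
# on accuracy yet is clinically useless.
y_true = np.array([0] * 95 + [1] * 5)   # 5% positive prevalence
y_pred = np.zeros(100, dtype=int)       # "always negative" classifier

accuracy = (y_true == y_pred).mean()

# Balanced accuracy averages per-class recall and exposes the failure.
recall_pos = (y_pred[y_true == 1] == 1).mean()
recall_neg = (y_pred[y_true == 0] == 0).mean()
balanced_accuracy = (recall_pos + recall_neg) / 2

print(accuracy, balanced_accuracy)  # 0.95 vs 0.5 (chance level)
```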


Subject(s)
Artificial Intelligence
2.
Nat Methods ; 21(2): 195-212, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347141

ABSTRACT

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint: a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
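The fingerprint-driven selection process can be pictured with a toy sketch like the following; the field names and rules are invented for illustration and are not the framework's actual logic (the real tool encodes far more problem properties):

```python
# Hypothetical sketch of the "problem fingerprint" idea: a structured record
# of task properties drives metric selection. Fields and rules are invented.
from dataclasses import dataclass

@dataclass
class ProblemFingerprint:
    task: str                 # "semantic_segmentation", "object_detection", ...
    small_structures: bool    # are targets only a few pixels in size?
    class_imbalance: bool

def suggest_metrics(fp: ProblemFingerprint) -> list[str]:
    if fp.task == "semantic_segmentation":
        metrics = ["Dice"]
        if fp.small_structures:
            # boundary-aware metrics are less dominated by a few pixels
            metrics.append("NSD (normalized surface distance)")
        return metrics
    if fp.task == "object_detection":
        return ["AP", "FROC"] if fp.class_imbalance else ["AP@IoU=0.5"]
    return ["accuracy"]

fp = ProblemFingerprint("semantic_segmentation",
                        small_structures=True, class_imbalance=True)
print(suggest_metrics(fp))
```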


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Machine Learning; Semantics
3.
ArXiv ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-36945687

ABSTRACT

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: while taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

4.
Med Image Anal ; 88: 102844, 2023 08.
Article in English | MEDLINE | ID: mdl-37270898

ABSTRACT

The field of surgical computer vision has seen considerable breakthroughs in recent years with the rising popularity of deep neural network-based methods. However, standard fully supervised approaches for training such models require vast amounts of annotated data, imposing a prohibitively high cost, especially in the clinical domain. Self-Supervised Learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing useful representations to be learned from unlabeled data alone. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and largely unexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO and SwAV) in the context of surgical computer vision. We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental tasks in surgical context understanding: phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to training data quantities in semi-supervised settings. Correct transfer of these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL (up to 7.4% on phase recognition and 20% on tool presence detection) and over state-of-the-art semi-supervised phase recognition approaches (by up to 14%). Further results obtained on a highly diverse selection of surgical datasets exhibit strong generalization properties. The code is available at https://github.com/CAMMA-public/SelfSupSurg.
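All four investigated methods build on contrastive or clustering objectives; as a rough illustration, a minimal NumPy version of the InfoNCE/NT-Xent loss family underlying SimCLR and MoCo might look as follows (a sketch for intuition only, not the authors' implementation, which uses momentum encoders, queues, etc.):

```python
import numpy as np

# Minimal InfoNCE/NT-Xent-style contrastive loss in NumPy (illustrative).
def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, d) embeddings of two augmented views of N images."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # row i's positive is column i; all other columns act as negatives
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
loss_pos = info_nce(z, z + 0.01 * rng.normal(size=(8, 32)))  # matched views
loss_rand = info_nce(z, rng.normal(size=(8, 32)))            # unrelated views
print(loss_pos, loss_rand)  # matched views give a much smaller loss
```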


Subject(s)
Computers; Neural Networks, Computer; Humans; Supervised Machine Learning
5.
IEEE Trans Med Imaging ; 42(7): 1920-1931, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36374877

ABSTRACT

Recent advancements in deep learning methods bring computer assistance a step closer to fulfilling the promise of safer surgical procedures. However, the generalizability of such methods often depends on training on diverse datasets from multiple medical institutions, which is a restrictive requirement considering the sensitive nature of medical data. Recently proposed collaborative learning methods such as Federated Learning (FL) allow for training on remote datasets without the need to explicitly share data. Even so, data annotation still represents a bottleneck, particularly in medicine and surgery, where clinical expertise is often required. With these constraints in mind, we propose FedCy, a federated semi-supervised learning (FSSL) method that combines FL and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos, thereby improving performance on the task of surgical phase recognition. By leveraging temporal patterns in the labeled data, FedCy helps guide unsupervised training on unlabeled data towards learning task-specific features for phase recognition. We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases using a newly collected multi-institutional dataset of laparoscopic cholecystectomy videos. Furthermore, we demonstrate that our approach also learns more generalizable features when tested on data from an unseen domain.
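The federated component rests on the aggregation step that FL methods share: each institution trains locally, and only model parameters, never data, are pooled. A minimal size-weighted averaging sketch (illustrative, not FedCy's actual aggregation rule):

```python
import numpy as np

# FedAvg-style aggregation: average client weights, weighted by how much
# local data each client (here, hospital) holds. Illustrative sketch only.
def federated_average(client_weights, client_sizes):
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# three hospitals with different amounts of local video data
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 700]
global_w = federated_average(weights, sizes)
print(global_w)  # pulled toward the largest site's weights
```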


Subject(s)
Supervised Machine Learning; Surgical Procedures, Operative; Video Recording
6.
Nat Mach Intell ; 5(7): 799-810, 2023 Jul.
Article in English | MEDLINE | ID: mdl-38706981

ABSTRACT

Medical artificial intelligence (AI) has tremendous potential to advance healthcare by supporting and contributing to the evidence-based practice of medicine, personalizing patient treatment, reducing costs, and improving both healthcare provider and patient experience. Unlocking this potential requires systematic, quantitative evaluation of the performance of medical AI models on large-scale, heterogeneous data capturing diverse patient populations. Here, to meet this need, we introduce MedPerf, an open platform for benchmarking AI models in the medical domain. MedPerf focuses on enabling federated evaluation of AI models, by securely distributing them to different facilities, such as healthcare organizations. This process of bringing the model to the data empowers each facility to assess and verify the performance of AI models in an efficient and human-supervised process, while prioritizing privacy. We describe the current challenges healthcare and AI communities face, the need for an open platform, the design philosophy of MedPerf, its current implementation status and real-world deployment, our roadmap and, importantly, the use of MedPerf with multiple international institutions within cloud-based technology and on-premises scenarios. Finally, we welcome new contributions by researchers and organizations to further strengthen MedPerf as an open benchmarking platform.

7.
Int J Comput Assist Radiol Surg ; 17(8): 1469-1476, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35471624

ABSTRACT

PURPOSE: Semantic segmentation and activity classification are key components of intelligent surgical systems able to understand and assist clinical workflow. In the operating room (OR), semantic segmentation is at the core of creating robots aware of clinical surroundings, whereas activity classification aims at understanding OR workflow at a higher level. State-of-the-art semantic segmentation and activity recognition approaches are fully supervised, which is not scalable. Self-supervision can decrease the amount of annotated data needed. METHODS: We propose a new 3D self-supervised task for OR scene understanding utilizing OR scene images captured with time-of-flight (ToF) cameras. Contrary to other self-supervised approaches, where handcrafted pretext tasks focus on 2D image features, our proposed task consists of predicting the relative 3D distance of image patches by exploiting the depth maps. Learning 3D spatial context generates discriminative features for our downstream tasks. RESULTS: Our approach is evaluated on two tasks and datasets containing multiview data captured from clinical scenarios. We demonstrate a noteworthy improvement in performance on both tasks, specifically in low-data regimes, where the utility of self-supervised learning is highest. CONCLUSION: We propose a novel privacy-preserving self-supervised approach utilizing depth maps. Our proposed method shows performance on par with other self-supervised approaches and could be an interesting way to alleviate the burden of full supervision.
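The appeal of this pretext task is that its labels come for free from the depth maps. A deliberately simplified sketch of how a relative 3D distance between two patches could be derived is shown below; the geometry (unit focal length, mean patch depth) is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np

# Sketch: derive a relative 3-D distance label for two image patches from a
# depth map, giving "free" supervision. Geometry simplified for clarity.
def patch_center_depth(depth, y, x, size=8):
    return depth[y:y+size, x:x+size].mean()

def relative_3d_distance(depth, p1, p2, size=8, fx=1.0):
    """Approximate 3-D offset between patch centers p1, p2 (row, col)."""
    z1 = patch_center_depth(depth, *p1, size)
    z2 = patch_center_depth(depth, *p2, size)
    # back-project pixel offsets using a unit focal length (assumption)
    dy = (p2[0] - p1[0]) * (z1 + z2) / 2 / fx
    dx = (p2[1] - p1[1]) * (z1 + z2) / 2 / fx
    dz = z2 - z1
    return np.sqrt(dx**2 + dy**2 + dz**2)

depth = np.full((32, 32), 2.0)   # flat scene 2 m from the camera
d = relative_3d_distance(depth, (0, 0), (0, 16))
print(d)  # purely lateral offset: 16 px at 2 m depth -> 32.0 with fx = 1
```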


Subject(s)
Operating Rooms; Supervised Machine Learning; Humans
8.
Sci Data ; 8(1): 92, 2021 03 25.
Article in English | MEDLINE | ID: mdl-33767191

ABSTRACT

We developed a rich dataset of Chest X-Ray (CXR) images to assist investigators in artificial intelligence research. The data were collected using an eye-tracking system while a radiologist reviewed and reported on 1,083 CXR images. The dataset contains the following aligned data: CXR image, transcribed radiology report text, radiologist's dictation audio and eye gaze coordinate data. We hope this dataset can contribute to various areas of research, particularly explainable and multimodal deep learning/machine learning methods. Furthermore, investigators in disease classification and localization, automated radiology report generation, and human-machine interaction can benefit from these data. We report deep learning experiments that utilize the attention maps produced by the eye gaze dataset to show its potential utility.


Subject(s)
Deep Learning; Thorax/diagnostic imaging; Humans; Radiography
9.
NPJ Digit Med ; 4(1): 25, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33589700

ABSTRACT

Image-based teleconsultation using smartphones has become increasingly popular. In parallel, deep learning algorithms have been developed to detect radiological findings in chest X-rays (CXRs). However, the feasibility of using smartphones to automate this process has yet to be evaluated. This study developed a recalibration method to build deep learning models that detect radiological findings on CXR photographs. Two publicly available databases (MIMIC-CXR and CheXpert) were used to build the models, and four derivative datasets containing 6453 CXR photographs were collected to evaluate model performance. After recalibration, the model achieved areas under the receiver operating characteristic curve of 0.80 (95% confidence interval: 0.78-0.82), 0.88 (0.86-0.90), 0.81 (0.79-0.84), 0.79 (0.77-0.81), 0.84 (0.80-0.88), and 0.90 (0.88-0.92), respectively, for detecting cardiomegaly, edema, consolidation, atelectasis, pneumothorax, and pleural effusion. The recalibration strategy, respectively, recovered 84.9%, 83.5%, 53.2%, 57.8%, 69.9%, and 83.0% of the performance losses of the uncalibrated model. We conclude that the recalibration method can transfer models from digital CXRs to CXR photographs, which is expected to support physicians in their clinical work.
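The AUCs reported above can be computed without any curve plotting via the rank-based (Mann-Whitney) formulation of the area under the ROC curve; a small self-contained sketch:

```python
import numpy as np

# AUC as the probability that a random positive case scores higher than a
# random negative one, counting ties as half (Mann-Whitney formulation).
def auc(y_true, scores):
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```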

10.
JAMA Netw Open ; 3(10): e2022779, 2020 10 01.
Article in English | MEDLINE | ID: mdl-33034642

ABSTRACT

Importance: Chest radiography is the most common diagnostic imaging examination performed in emergency departments (EDs). Augmenting clinicians with automated preliminary read assistants could help expedite their workflows, improve accuracy, and reduce the cost of care. Objective: To assess the performance of artificial intelligence (AI) algorithms in realistic radiology workflows by performing an objective comparative evaluation of the preliminary reads of anteroposterior (AP) frontal chest radiographs performed by an AI algorithm and radiology residents. Design, Setting, and Participants: This diagnostic study included a set of 72 findings assembled by clinical experts to constitute a full-fledged preliminary read of AP frontal chest radiographs. A novel deep learning architecture was designed for an AI algorithm to estimate the findings per image. The AI algorithm was trained using a multihospital training data set of 342 126 frontal chest radiographs captured in ED and urgent care settings. The training data were labeled from their associated reports. Image-based F1 score was chosen to optimize the operating point on the receiver operating characteristics (ROC) curve so as to minimize the number of missed findings and overcalls per image read. The performance of the model was compared with that of 5 radiology residents recruited from multiple institutions in the US in an objective study in which a separate data set of 1998 AP frontal chest radiographs was drawn from a hospital source representative of realistic preliminary reads in inpatient and ED settings. A triple consensus with adjudication process was used to derive the ground truth labels for the study data set. The performance of AI algorithm and radiology residents was assessed by comparing their reads with ground truth findings. All studies were conducted through a web-based clinical study application system. The triple consensus data set was collected between February and October 2018. 
The comparison study was performed between January and October 2019. Data were analyzed from October to February 2020. After the first round of reviews, further analysis of the data was performed from March to July 2020. Main Outcomes and Measures: The learning performance of the AI algorithm was judged using the conventional ROC curve and the area under the curve (AUC) during training and field testing on the study data set. For the AI algorithm and radiology residents, the individual finding label performance was measured using the conventional measures of label-based sensitivity, specificity, and positive predictive value (PPV). In addition, the agreement with the ground truth on the assignment of findings to images was measured using the pooled κ statistic. The preliminary read performance was recorded for AI algorithm and radiology residents using new measures of mean image-based sensitivity, specificity, and PPV designed for recording the fraction of misses and overcalls on a per image basis. The 1-sided analysis of variance test was used to compare the means of each group (AI algorithm vs radiology residents) using the F distribution, and the null hypothesis was that the groups would have similar means. Results: The trained AI algorithm achieved a mean AUC across labels of 0.807 (weighted mean AUC, 0.841) after training. On the study data set, which had a different prevalence distribution, the mean AUC achieved was 0.772 (weighted mean AUC, 0.865). The interrater agreement with ground truth finding labels for AI algorithm predictions had pooled κ value of 0.544, and the pooled κ for radiology residents was 0.585. For the preliminary read performance, the analysis of variance test was used to compare the distributions of AI algorithm and radiology residents' mean image-based sensitivity, PPV, and specificity.
The mean image-based sensitivity for AI algorithm was 0.716 (95% CI, 0.704-0.729) and for radiology residents was 0.720 (95% CI, 0.709-0.732) (P = .66), while the PPV was 0.730 (95% CI, 0.718-0.742) for the AI algorithm and 0.682 (95% CI, 0.670-0.694) for the radiology residents (P < .001), and specificity was 0.980 (95% CI, 0.980-0.981) for the AI algorithm and 0.973 (95% CI, 0.971-0.974) for the radiology residents (P < .001). Conclusions and Relevance: These findings suggest that it is possible to build AI algorithms that reach and exceed the mean level of performance of third-year radiology residents for full-fledged preliminary read of AP frontal chest radiographs. This diagnostic study also found that while the more complex findings would still benefit from expert overreads, the performance of AI algorithms was associated with the amount of data available for training rather than the level of difficulty of interpretation of the finding. Integrating such AI systems in radiology workflows for preliminary interpretations has the potential to expedite existing radiology workflows and address resource scarcity while improving overall accuracy and reducing the cost of care.
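The operating-point choice described above (optimizing image-based F1 over the ROC curve to balance misses against overcalls) can be sketched as a simple threshold search; illustrative only, not the study's code:

```python
import numpy as np

# Pick the score threshold that maximizes F1, trading off missed findings
# (false negatives) against overcalls (false positives). Illustrative sketch.
def best_f1_threshold(y_true, scores):
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    best_t, best_f1 = 0.0, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        fn = np.sum(~pred & (y_true == 1))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

t, f1 = best_f1_threshold([0, 0, 1, 1, 1], [0.2, 0.6, 0.4, 0.7, 0.9])
print(t, f1)  # threshold 0.4 keeps all positives at the cost of one overcall
```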


Subject(s)
Artificial Intelligence/standards; Internship and Residency/standards; Radiographic Image Interpretation, Computer-Assisted/standards; Thorax/diagnostic imaging; Algorithms; Area Under Curve; Artificial Intelligence/statistics & numerical data; Humans; Internship and Residency/methods; Internship and Residency/statistics & numerical data; Quality of Health Care/standards; Quality of Health Care/statistics & numerical data; ROC Curve; Radiographic Image Interpretation, Computer-Assisted/methods; Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data; Radiography/instrumentation; Radiography/methods
11.
IEEE J Biomed Health Inform ; 23(6): 2211-2219, 2019 11.
Article in English | MEDLINE | ID: mdl-29994623

ABSTRACT

Robotic endoscopic systems offer a minimally invasive approach to the examination of internal body structures, and their application is rapidly extending to cover the increasing needs for accurate therapeutic interventions. In this context, it is essential for such systems to be able to perform measurements, such as measuring the distance traveled by a wireless capsule endoscope, so as to determine the location of a lesion in the gastrointestinal tract, or to measure the size of lesions for diagnostic purposes. In this paper, we investigate the feasibility of performing contactless measurements using a computer vision approach based on neural networks. The proposed system integrates a deep convolutional image registration approach and a multilayer feed-forward neural network into a novel architecture. The main advantage of this system, with respect to the state-of-the-art ones, is that it is more generic in the sense that it is 1) unconstrained by specific models, 2) more robust to nonrigid deformations, and 3) adaptable to most of the endoscopic systems and environment, while enabling measurements of enhanced accuracy. The performance of this system is evaluated under ex vivo conditions using a phantom experimental model and a robotically assisted test bench. The results obtained promise a wider applicability and impact in endoscopy in the era of big data.


Subject(s)
Capsule Endoscopy/methods; Deep Learning; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Algorithms; Equipment Design; Humans; Phantoms, Imaging; Robotics
12.
J Med Syst ; 42(8): 146, 2018 Jun 29.
Article in English | MEDLINE | ID: mdl-29959539

ABSTRACT

To detect pulmonary abnormalities such as tuberculosis (TB), automatic analysis and classification of chest radiographs can serve as a reliable alternative to more sophisticated and technologically demanding methods (e.g., culture or sputum smear analysis). In target regions such as Kenya, TB is highly prevalent and often co-occurs with HIV, while resources and medical assistance are limited; in such regions an automatic screening system can provide a cost-effective solution for a large rural population. Our fully automatic TB screening system processes incoming chest X-rays (CXRs) by applying image preprocessing techniques to enhance image quality, followed by adaptive segmentation based on model selection. The delineated lung regions are described by a multitude of image features, which are then reduced by a feature selection strategy to provide the best description for the classifier, which decides whether the analyzed image is normal or abnormal. Our goal is to find the optimal feature set from a larger pool of generic image features originally used for problems such as object detection and image retrieval. For performance evaluation, measures such as the area under the ROC curve (AUC) and accuracy (ACC) were considered. Using a neural network classifier on two publicly available data collections, namely the Montgomery and Shenzhen datasets, we achieved a maximum AUC and accuracy of 0.99 and 97.03%, respectively. Further, we compared our results with existing state-of-the-art systems and with radiologists' decisions.


Subject(s)
Algorithms; Radiography; Tuberculosis/diagnostic imaging; Automation; Humans; Mass Screening; Sputum
13.
Int J Comput Assist Radiol Surg ; 11(1): 99-106, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26092662

ABSTRACT

PURPOSE: To improve detection of pulmonary and pleural abnormalities caused by pneumonia or tuberculosis (TB) in digital chest X-rays (CXRs). METHODS: A method was developed and tested by combining shape and texture features to classify CXRs into two categories: TB and non-TB cases. Based on the observation that radiologist interpretation is typically comparative between the left and right lung fields, the algorithm uses shape features to describe the overall geometrical characteristics of the lung fields and texture features to represent image characteristics inside them. RESULTS: Our algorithm was evaluated on two different datasets containing tuberculosis and pneumonia cases. CONCLUSIONS: Using our proposed algorithm, we were able to increase the overall performance, measured as area under the ROC curve (AUC), by 2.4% over our previous work.


Subject(s)
Lung/diagnostic imaging; Pneumonia/diagnostic imaging; Radiography, Thoracic/methods; Tuberculosis/diagnostic imaging; Algorithms; Humans
14.
World J Gastroenterol ; 21(17): 5119-30, 2015 May 07.
Article in English | MEDLINE | ID: mdl-25954085

ABSTRACT

Currently, the major problem of all existing commercial capsule devices is the lack of control of movement. In the future, with an interface application, the clinician will be able to stop and direct the device toward points of interest for detailed inspection, diagnosis and therapy delivery. This editorial presents current commercially available designs, European projects and delivery capsules, and gives an overview of the progress required, and expected in the authors' opinion, over the next 5 years leading to 2020.


Subject(s)
Capsule Endoscopes/trends; Capsule Endoscopy/trends; Wireless Technology/trends; Capsule Endoscopy/instrumentation; Capsule Endoscopy/methods; Equipment Design; Forecasting; Humans; Miniaturization; Nanostructures; Nanotechnology/trends; Time Factors
15.
IEEE Trans Biomed Eng ; 62(1): 352-60, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25167544

ABSTRACT

In this paper, we propose a platform to achieve accurate localization of small-bowel lesions and endoscopic video stabilization in wireless capsule endoscopy. Current research modules rely on the use of external magnetic fields and triangulation methods to calculate the position vector of the capsule, leading to considerable error margins. Our platform, entitled OdoCapsule (a synthesis of the words Odometer and Capsule), provides real-time distance information from the point of duodenal entry to the point of exit from the small bowel. To achieve this, OdoCapsule is equipped with three miniature legs. Each leg carries a soft rubber wheel, which is made with human-compliant material. These legs are extendable and retractable thanks to a micromotor and three custom-made torsion springs. The wheels are specifically designed to function as microodometers: each rotation they perform is registered. Hence, the covered distance is measured accurately in real time. Furthermore, with its legs fully extended, OdoCapsule can stabilize itself inside the small-bowel lumen thus offering smoother video capture and better image processing. Recent ex vivo testing of this concept, using porcine small bowel and a commercially available (custom-modified) capsule endoscope, has proved its viability.


Subject(s)
Capsule Endoscopy/instrumentation; Image Enhancement/instrumentation; Information Storage and Retrieval; Intestine, Small/pathology; Signal Processing, Computer-Assisted/instrumentation; Wireless Technology/instrumentation; Animals; Equipment Design; Equipment Failure Analysis; Reproducibility of Results; Sensitivity and Specificity; Swine
16.
Expert Rev Gastroenterol Hepatol ; 9(2): 217-35, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25169106

ABSTRACT

This review presents issues pertaining to lesion detection in small-bowel capsule endoscopy (SBCE). The use of prokinetics, chromoendoscopy, diagnostic yield indicators, localization issues and the use of 3D reconstruction are presented. The authors also review the current status (and future expectations) of automatic lesion detection software development. Automatic lesion detection and reporting, and development of an accurate lesion localization system, are the main software challenges of our time. The 'smart', selective and judicious use (before as well as during SBCE) of prokinetics in combination with other modalities (such as real-time viewing and/or purge) improves the completion rate of SBCE. The tracking of the capsule within the body is important for the localization of abnormal findings and planning of further therapeutic interventions. Currently, localization is based on transit time. Recently proposed software and hardware solutions are presented herein. Moreover, the feasibility of software-based 3D representation (an attempt at 3D reconstruction) is examined.


Subject(s)
Capsule Endoscopy/trends; Intestinal Neoplasms/diagnosis; Intestine, Small/pathology; Capsule Endoscopy/adverse effects; Capsule Endoscopy/methods; Equipment Design; Humans; Imaging, Three-Dimensional; Intestinal Neoplasms/pathology; Software
17.
Gastrointest Endosc ; 80(4): 642-651, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24998466

ABSTRACT

BACKGROUND: In small-bowel capsule endoscopy (SBCE), differentiating masses (ie, lesions of higher probability for neoplasia) requiring more aggressive intervention from bulges (essentially, false-positive findings) is a challenging task; recently, software that enables 3-dimensional (3D) reconstruction has become available. OBJECTIVE: To evaluate whether "coupling" 3D reconstructed video clips with the standard 2-dimensional (s2D) counterparts helps in distinguishing masses from bulges. DESIGN: Three expert and three novice SBCE readers, blinded to one another and in random order, reviewed the s2D video clips and subsequently the s2D clips coupled with their 3D reconstruction (2D+3D). SETTING: Multicenter study in 3 community hospitals in Italy and a university hospital in Scotland. PATIENTS: Thirty-two deidentified 5-minute video clips, containing mucosal bulging (19) or masses (13). INTERVENTION: 3D reconstruction of s2D SBCE video clips. MAIN OUTCOME MEASURE: Differentiation of masses from bulges with s2D and 2D+3D video clips, estimated by the area under the receiver operating characteristic curve (AUC); interobserver agreement. RESULTS: AUC for experts and novices for s2D video clips was .74 and .5, respectively (P = .0053). AUC for experts and novices with 2D+3D was .70 (compared with s2D: P = .245) and .57 (compared with s2D: P = .049), respectively. AUC for experts and novices with 2D+3D was similar (P = .1846). The interobserver agreement was good for both experts and novices with the s2D (k = .71 and .54, respectively) and the 2D+3D video clips (k = .58 in both groups). LIMITATIONS: Few, short video clips; fixed angle of 3D reconstruction. CONCLUSIONS: The adjunction of a 3D reconstruction to the s2D video reading platform does not improve the performance of expert SBCE readers, although it significantly increases the performance of novices in distinguishing masses from bulges.
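The interobserver agreement above is reported as a κ statistic; for two raters it reduces to Cohen's kappa, sketched below on invented binary mass-vs-bulge labels:

```python
import numpy as np

# Cohen's kappa for two raters: observed agreement corrected for the
# agreement expected by chance from each rater's label frequencies.
def cohens_kappa(r1, r2):
    r1 = np.asarray(r1)
    r2 = np.asarray(r2)
    po = np.mean(r1 == r2)                                   # observed
    labels = np.union1d(r1, r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)  # chance
    return (po - pe) / (1 - pe)

reader_a = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = mass, 0 = bulge (invented data)
reader_b = [1, 1, 0, 0, 0, 0, 1, 1]
k = cohens_kappa(reader_a, reader_b)
print(k)
```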


Subject(s)
Capsule Endoscopy/methods; Image Interpretation, Computer-Assisted; Imaging, Three-Dimensional/statistics & numerical data; Intestinal Diseases/pathology; Intestine, Small/pathology; Cohort Studies; Confidence Intervals; Diagnosis, Differential; Female; Humans; Intestinal Diseases/diagnosis; Intestinal Neoplasms/diagnosis; Intestinal Neoplasms/pathology; Male; Observer Variation; ROC Curve; Sensitivity and Specificity; Video Recording
18.
IEEE Trans Med Imaging ; 33(2): 577-90, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24239990

ABSTRACT

The National Library of Medicine (NLM) is developing a digital chest X-ray (CXR) screening system for deployment in resource-constrained communities and developing countries worldwide with a focus on early detection of tuberculosis. A critical component in the computer-aided diagnosis of digital CXRs is the automatic detection of the lung regions. In this paper, we present a nonrigid registration-driven robust lung segmentation method using image retrieval-based patient-specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: 1) a content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure, 2) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and 3) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. Our average accuracy of 95.4% on the public JSRT database is the highest among published results. Accuracies of 94.1% and 91.7% on two new CXR datasets from Montgomery County, MD, USA, and India, respectively, demonstrate the robustness of our lung segmentation approach.
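The retrieval stage ranks training CXRs by shape similarity between projection profiles; the Bhattacharyya measure itself is simple to compute on normalized profiles. A hedged sketch (not NLM's code; the 1-D profiles here merely stand in for partial Radon projections):

```python
import numpy as np

# Bhattacharyya coefficient between two normalized 1-D profiles.
# 1.0 means the profiles have identical shape after normalization.
def bhattacharyya(p, q):
    p = np.array(p, dtype=float)   # copies, so inputs are not modified
    q = np.array(q, dtype=float)
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(np.sqrt(p * q)))

profile = [2.0, 3.0, 5.0]
bc_same = bhattacharyya(profile, profile)          # identical -> 1.0
bc_perm = bhattacharyya(profile, [5.0, 3.0, 2.0])  # reordered -> < 1.0
print(bc_same, bc_perm)
```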


Subject(s)
Lung/anatomy & histology, Radiographic Image Enhancement/methods, Radiographic Image Interpretation, Computer-Assisted/methods, Radiography, Thoracic/methods, Algorithms, Databases, Factual, Humans, Models, Biological
19.
IEEE Trans Med Imaging ; 33(2): 233-45, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24108713

ABSTRACT

Tuberculosis is a major health threat in many regions of the world. Opportunistic infections in immunocompromised HIV/AIDS patients and multi-drug-resistant bacterial strains have exacerbated the problem, while diagnosing tuberculosis still remains a challenge. When left undiagnosed and thus untreated, mortality rates of patients with tuberculosis are high. Standard diagnostics still rely on methods developed in the last century. They are slow and often unreliable. In an effort to reduce the burden of the disease, this paper presents our automated approach for detecting tuberculosis in conventional posteroanterior chest radiographs. We first extract the lung region using a graph cut segmentation method. For this lung region, we compute a set of texture and shape features, which enable the X-rays to be classified as normal or abnormal using a binary classifier. We measure the performance of our system on two datasets: a set collected by the tuberculosis control program of our local county's health department in the United States, and a set collected by Shenzhen Hospital, China. The proposed computer-aided diagnostic system for TB screening, which is ready for field deployment, achieves a performance that approaches the performance of human experts. We achieve an area under the ROC curve (AUC) of 87% (78.3% accuracy) for the first set, and an AUC of 90% (84% accuracy) for the second set. For the first set, we compare our system performance with the performance of radiologists. When trying not to miss any positive cases, radiologists achieve an accuracy of about 82% on this set, and their false positive rate is about half of our system's rate.
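The abstract reports performance as area under the ROC curve (AUC) for a binary normal/abnormal classifier. As a reminder of what that number means, here is a minimal, self-contained AUC computation via the rank-sum (Mann-Whitney) identity: the probability that a randomly chosen abnormal case scores higher than a randomly chosen normal one. This is an illustrative sketch, not the paper's evaluation code.

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via pairwise comparisons: fraction of (abnormal, normal)
    pairs in which the abnormal case receives the higher score,
    crediting ties with 0.5. labels: 1 = abnormal, 0 = normal."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

For large datasets one would sort once and use ranks instead of the quadratic pairwise comparison, but the pairwise form makes the probabilistic interpretation explicit.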


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/methods, Radiography, Thoracic/methods, Tuberculosis, Pulmonary/diagnostic imaging, Algorithms, Humans, ROC Curve
20.
World J Gastroenterol ; 19(44): 8028-33, 2013 Nov 28.
Article in English | MEDLINE | ID: mdl-24307796

ABSTRACT

AIM: To evaluate the three-dimensional (3-D) representation performance of 4 publicly available Shape-from-Shading (SfS) algorithms in small-bowel capsule endoscopy (SBCE). METHODS: SfS techniques recover the shape of objects using the gradual variation of shading. There are 4 publicly available SfS algorithms. To the best of our knowledge, no comparative study with images obtained during clinical SBCE has been performed to date. Three experienced reviewers were asked to evaluate 54 two-dimensional (2-D) images (categories: protrusion/inflammation/vascular) transformed to 3-D by the aforementioned SfS 3-D algorithms. The best algorithm was selected and inter-rater agreement was calculated. RESULTS: Four publicly available SfS algorithms were compared. Tsai's SfS algorithm outperformed the rest (selected as best performing in 45/54 SBCE images), followed by Ciuti's algorithm (best performing in 7/54 images) and Torreão's (in 1/54 images). In 26/54 images, Tsai's algorithm was unanimously selected as the best performing 3-D representation SfS software. The superiority of Tsai's 3-D algorithm was independent of lesion category (protrusion/inflammatory/vascular; P = 0.678) and of the CE system used to obtain the 2-D images (MiroCam/PillCam; P = 0.558). Lastly, the inter-observer agreement was good (kappa = 0.55). CONCLUSION: 3-D representation software offers a plausible alternative for 3-D representation of conventional capsule endoscopy images (until optics technology matures enough to allow hardware-enabled "real" 3-D reconstruction of the gastrointestinal tract).
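The study above summarizes inter-rater agreement with a kappa statistic (kappa = 0.55). As an illustration of the underlying idea, here is a minimal Cohen's kappa for two raters, i.e. observed agreement corrected for the agreement expected by chance. Note this is a pairwise sketch; with three reviewers, as in the study, a multi-rater generalization such as Fleiss' kappa would typically be used.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa between two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement).
    1.0 = perfect agreement, 0.0 = no better than chance."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal category rates
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)
```

By the usual rule-of-thumb interpretation, a kappa around 0.55 falls in the "moderate to good" agreement band, consistent with the abstract's characterization.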


Subject(s)
Algorithms, Capsule Endoscopy, Image Interpretation, Computer-Assisted/methods, Intestine, Small/pathology, Adult, Humans, Imaging, Three-Dimensional, Middle Aged, Observer Variation, Predictive Value of Tests, Reproducibility of Results, Software